Adversarial Robust Aerial Image Recognition Based on Reactive-Proactive Defense Framework with Deep Ensembles

Authors

Abstract

As a safety-related application, visual systems based on deep neural networks (DNNs) in modern unmanned aerial vehicles (UAVs) show adversarial vulnerability when performing real-time inference. Recently, ensembles with various defensive strategies against adversarial samples have drawn much attention due to the increased diversity and reduced variance of their members. Aimed at the recognition task of remote sensing images (RSIs), this paper proposes a reactive-proactive ensemble defense framework to solve this security problem. In the reactive defense, we fuse the scoring functions of several classical detection algorithms on hidden features, and average the output confidences from the sub-models as a second fusion. In terms of proactive defense, we attempt two strategies, including enhancing the robustness of each sub-model and limiting the transferability among the sub-models. In practical applications, RSIs are first input to the reactive part, which can detect and reject adversarial RSIs. The accepted ones are then passed to the robust proactive defense. We conduct extensive experiments on three benchmark RSI datasets (i.e., UCM, AID, and FGSC-23). The experimental results show that the method performs very well against gradient-based attacks. An analysis of applicable attack scenarios is also provided, which is helpful to the field. We perform a case study under the complete black-box scenario, where the highest defense rate reaches 93.25%. Most adversarial examples can be rejected in advance or correctly recognized by the enhanced ensemble. This article is one of the first to combine reactive and proactive defenses against adversarial attacks in the context of DNN-based UAVs.
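To make the two-stage flow concrete, here is a minimal sketch of such a reactive-proactive inference pipeline: a fused detector score decides whether an input RSI is accepted, and accepted inputs are classified by averaging the confidences of the ensemble's sub-models. The function names, the weighted fusion rule, the threshold, and the toy detectors and sub-models are all illustrative assumptions rather than the authors' implementation, in which the detectors score hidden-layer features and the sub-models are robustly trained networks.

```python
import numpy as np

def reactive_detect(x, detectors, weights, threshold):
    """First (reactive) stage: fuse the scores of several detection
    algorithms and accept the input only if the fused score is below
    a threshold. Higher scores mean 'more likely adversarial'."""
    scores = np.array([d(x) for d in detectors])      # one score per detector
    fused = float(np.dot(weights, scores))            # weighted score fusion
    return fused < threshold                          # True -> accept

def proactive_predict(x, sub_models):
    """Second (proactive) stage: average the output confidences of the
    robust sub-models and return the ensemble prediction."""
    probs = np.mean([m(x) for m in sub_models], axis=0)
    return int(np.argmax(probs)), probs

def recognize(x, detectors, det_weights, threshold, sub_models):
    """Reactive-proactive pipeline: detect/reject first, then classify."""
    if not reactive_detect(x, detectors, det_weights, threshold):
        return None                                   # rejected as adversarial
    label, _ = proactive_predict(x, sub_models)
    return label

# Toy usage with stand-in detectors and sub-models (random outputs).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    detectors = [lambda x: float(rng.uniform()) for _ in range(3)]
    sub_models = [lambda x: rng.dirichlet(np.ones(10)) for _ in range(4)]
    x = rng.normal(size=(3, 256, 256))                # dummy RSI tensor
    print(recognize(x, detectors, np.full(3, 1 / 3), 0.5, sub_models))
```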


Similar Articles

Adversarial Attacks on Image Recognition

The purpose of this project is to extend the work done by Papernot et al. in [4] on adversarial attacks in image recognition. We investigated whether a reduction in feature dimensionality can maintain a comparable level of misclassification success while increasing computational efficiency. We formed an attack on a black-box model with an unknown training set by forcing the oracle to misclassif...


Robust Deep Reinforcement Learning with Adversarial Attacks

This paper proposes adversarial attacks for Reinforcement Learning (RL) and then improves the robustness of Deep Reinforcement Learning (DRL) algorithms to parameter uncertainties with the help of these attacks. We show that even a naively engineered attack successfully degrades the performance of the DRL algorithm. We further improve the attack using gradient information of an engineered loss func...
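The "gradient information of an engineered loss" mechanism mentioned here is closely related to the fast gradient sign method (FGSM). Below is a generic one-step sketch in PyTorch; the fgsm_attack helper, the cross-entropy placeholder loss, and the toy linear model are assumptions for illustration, not the specific attack engineered in this paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, labels, epsilon):
    """One-step gradient attack in the spirit of FGSM: perturb the input
    along the sign of the gradient of a chosen loss w.r.t. the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), labels)      # placeholder "engineered" loss
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage: attack a stand-in linear classifier on random inputs.
if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Linear(16, 4)
    x = torch.randn(8, 16)
    y = model(x).argmax(dim=1)                        # current predictions
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    flipped = (model(x_adv).argmax(dim=1) != y).float().mean()
    print(f"fraction of predictions flipped: {flipped.item():.2f}")
```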


Stochastic Activation Pruning for Robust Adversarial Defense

Neural networks have been found vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory. We cast the problem as a minimax zero-sum game between the adversary and the model...
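A simplified sketch of the stochastic activation pruning idea follows: activations are sampled for survival with probability proportional to their magnitude, and the survivors are rescaled so the layer output stays roughly unbiased in expectation. The helper name, the keep fraction, and the exact rescaling rule are illustrative assumptions that may differ in detail from the cited paper.

```python
import numpy as np

def stochastic_activation_prune(h, keep_frac=0.5, rng=None):
    """Simplified stochastic activation pruning: sample which activations
    survive with probability proportional to their magnitude, then rescale
    the survivors to keep the layer output roughly unbiased."""
    if rng is None:
        rng = np.random.default_rng()
    p = np.abs(h) / (np.abs(h).sum() + 1e-12)         # sampling distribution
    k = max(1, int(keep_frac * h.size))               # number of draws
    idx = rng.choice(h.size, size=k, replace=True, p=p)
    keep = np.zeros(h.size, dtype=bool)
    keep[idx] = True
    scale = 1.0 / (1.0 - (1.0 - p) ** k + 1e-12)      # inverse keep-probability
    return np.where(keep, h * scale, 0.0)

# Toy usage on a random "hidden layer" activation vector.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h = rng.normal(size=32)
    print(stochastic_activation_prune(h, keep_frac=0.25, rng=rng))
```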


Adversarial Examples Generation and Defense Based on Generative Adversarial Network

We propose a novel generative adversarial network to generate and defend adversarial examples for deep neural networks (DNNs). The adversarial stability of a network D is improved by training alternately with an additional network G. Our experiment is carried out on MNIST, and the adversarial examples are generated in an efficient way compared with widely-used gradient-based methods. After tra...
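The alternating scheme can be sketched generically: a generator G proposes bounded perturbations that maximize a classifier D's loss, while D is trained to classify both clean and perturbed inputs. The MLP stand-ins, perturbation budget, and losses below are illustrative assumptions only and do not reproduce the cited paper's architectures or objective.

```python
import torch
import torch.nn.functional as F

# Stand-in networks: G proposes a bounded perturbation, D classifies.
G = torch.nn.Sequential(torch.nn.Linear(784, 784), torch.nn.Tanh())
D = torch.nn.Sequential(torch.nn.Linear(784, 10))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
epsilon = 0.1                                         # perturbation budget

def train_step(x, y):
    """One alternating step: G tries to fool D, then D learns to resist."""
    # Update G: maximize D's classification loss on perturbed inputs.
    loss_g = -F.cross_entropy(D(x + epsilon * G(x)), y)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    # Update D: classify both clean and perturbed inputs correctly.
    x_adv = (x + epsilon * G(x)).detach()
    loss_d = F.cross_entropy(D(x), y) + F.cross_entropy(D(x_adv), y)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_d.item()

# Toy usage on random MNIST-sized batches.
if __name__ == "__main__":
    torch.manual_seed(0)
    for _ in range(3):
        x = torch.rand(32, 784)
        y = torch.randint(0, 10, (32,))
        print(train_step(x, y))
```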


Adversarial Example Defense: Ensembles of Weak Defenses are not Strong

Ongoing research has proposed several methods to defend neural networks against adversarial examples, many of which researchers have shown to be ineffective. We ask whether a strong defense can be created by combining multiple (possibly weak) defenses. To answer this question, we study three defenses that follow this approach. Two of these are recently proposed defenses that intentionally combi...



Journal

Journal Title: Remote Sensing

Year: 2023

ISSN: 2315-4632, 2315-4675

DOI: https://doi.org/10.3390/rs15194660